reduce risk
From nuclear safety to LLM security: Applying non-probabilistic risk management strategies to build safe and secure LLM-powered systems
Gutfraind, Alexander; Bier, Vicki
Large language models (LLMs) offer unprecedented and growing capabilities, but also introduce complex safety and security challenges that resist conventional risk management. While conventional probabilistic risk analysis (PRA) requires exhaustive risk enumeration and quantification, the novelty and complexity of these systems make PRA impractical, particularly against adaptive adversaries. Previous research found that risk management in various fields of engineering, such as nuclear or civil engineering, is often solved by generic (i.e., non-probabilistic) strategies. Here we show how emerging risks in LLM-powered systems, including risks from adaptive adversaries, could be met with more than 100 of these non-probabilistic risk management strategies. The strategies are divided into five categories and are mapped to LLM security (and AI safety more broadly). We also present an LLM-powered workflow for applying these strategies, along with other workflows suitable for solution architects. Overall, despite some limitations, these strategies could contribute to security, safety, and other dimensions of responsible AI.
- North America > United States > Illinois > Cook County > Chicago (0.40)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- (14 more...)
- Information Technology > Security & Privacy (1.00)
- Energy > Power Industry > Utilities > Nuclear (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.46)
How sure is sure? Incorporating human error into machine learning
Human error and uncertainty are concepts that many artificial intelligence systems fail to grasp, particularly systems in which a human provides feedback to a machine learning model. Many of these systems are programmed to assume that humans are always certain and correct, but real-world decision-making includes occasional mistakes and uncertainty. Researchers from the University of Cambridge, along with The Alan Turing Institute, Princeton, and Google DeepMind, have been attempting to bridge the gap between human behaviour and machine learning, so that uncertainty can be more fully accounted for in AI applications where humans and machines work together. This could help reduce risk and improve the trust and reliability of these applications, especially where safety is critical, such as in medical diagnosis. The team adapted a well-known image classification dataset so that humans could provide feedback and indicate their level of uncertainty when labelling a particular image.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.25)
- North America > Canada > Quebec > Montreal (0.05)
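The idea described above, letting annotators express how sure they are of a label, can be illustrated with a minimal sketch. The dataset adaptation and training details in the article are not public here, so the helper names and the uniform-spreading scheme below are illustrative assumptions, not the researchers' actual method: each human judgment carries a confidence that is turned into a "soft" label before computing a standard cross-entropy loss.

```python
import math

def soft_label(label_index, confidence, num_classes):
    """Turn a hard label plus annotator confidence into a soft target,
    spreading the remaining probability mass uniformly over other classes.
    (Hypothetical scheme for illustration.)"""
    remainder = (1.0 - confidence) / (num_classes - 1)
    return [confidence if i == label_index else remainder
            for i in range(num_classes)]

def cross_entropy(target, predicted):
    """Cross-entropy between a soft target and model output probabilities."""
    return -sum(t * math.log(p) for t, p in zip(target, predicted) if t > 0)

# An annotator labels an image as class 0, but is only 70% sure.
target = soft_label(0, 0.7, 3)

overconfident_model = [0.9, 0.05, 0.05]
hedged_model = [0.7, 0.15, 0.15]

# The loss now rewards a model that mirrors the annotator's uncertainty
# rather than one that is maximally confident in the hard label.
print(cross_entropy(target, hedged_model) <
      cross_entropy(target, overconfident_model))  # True
```

Under this scheme, an overconfident prediction is penalized relative to one that matches the human's stated uncertainty, which is the behaviour the article says is missing from systems that assume humans are always certain and correct.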
Fairmatic raises $46M to bring AI to commercial auto insurance
With inflation sparking an increase in the cost of repairs, labor and claims, fees for insurance are similarly spiking across the board. Car insurance premiums rose 13.7% nationally over the past year, according to a study from Bankrate.com. Home insurance, meanwhile, climbed 12.1% year-on-year, Policygenius found. But Jonathan Matus argues that it doesn't have to be that way. He's the founder of Fairmatic, a company that's applying AI to -- at least according to him -- reduce risk in the car insurance industry.
- Asia > Middle East > Israel (0.07)
- North America > United States > Oregon (0.05)
- North America > United States > Colorado (0.05)
- (2 more...)
What Most People Don't Understand About AI - and The Ultimat
In other words, to say that artificial intelligence (AI) is the next step in enterprise would be an understatement. But while it is well known that AI is the next step forward, myths and misconceptions about AI and its processes still run rampant. For AI and machine learning (ML) to reach their full potential in streamlining enterprise operations, reducing costs, reducing risk, and increasing profits, they need to be implemented with precision by those with realistic expectations. In 2019, Techopedia ran a two-part survey and quiz to examine how well industry executives comprehend AI and ML. The results of our survey supported one clear answer: business and industry executives do not understand most of AI and ML.
Cloverleaf Analytics Hires Michael Schwabrow as Executive Vice President of Sales and Marketing
Cloverleaf Analytics, the leading provider of Insurance Intelligence solutions, today announced that Michael Schwabrow has joined the company as EVP of Sales and Marketing. Reporting to Cloverleaf President Robert Clark, Schwabrow will be responsible for Cloverleaf's go-to-market strategy and for cultivating relationships with insurers to maximize the value of Cloverleaf's Insurance Intelligence platform which includes Business Intelligence (BI), Artificial Intelligence (AI)/Machine Learning (ML), Natural Language Processing (NLP), and other technologies. Schwabrow has a long track record of collaborating with carriers and MGAs to attain meaningful digital transformation with immediate and long-term business results. With Cloverleaf, he will help carriers and MGAs to understand and unleash the real-world value of Insurance Intelligence across core business operations. "The insurance industry is like a big family, and our community is at a critical juncture for how to make smarter and more efficient decisions to reduce risk, improve product offerings, and strengthen the overall health of carrier books of business," said Schwabrow.
Forecasting Potential Misuses of Language Models for Disinformation Campaigns--and How to Reduce Risk
OpenAI researchers collaborated with Georgetown University's Center for Security and Emerging Technology and the Stanford Internet Observatory to investigate how large language models might be misused for disinformation purposes. The collaboration included an October 2021 workshop bringing together 30 disinformation researchers, machine learning experts, and policy analysts, and culminated in a co-authored report building on more than a year of research. This report outlines the threats that language models pose to the information environment if used to augment disinformation campaigns and introduces a framework for analyzing potential mitigations. As generative language models improve, they open up new possibilities in fields as diverse as healthcare, law, education and science. But, as with any new technology, it is worth considering how they can be misused.
- Media > News (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.74)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.74)
Find out how AI will reduce risk in the movie business and help to secure a healthier and more diverse industry in the future.
The technology of artificial intelligence promises to revolutionize all aspects of our lives. Though this technology is still in its infancy, major global industries are rushing to invest in AI solutions, and the movie business is no exception. But can AI really reduce risk in the movie production process, or is this pure fantasy? Given that the industry stands to save hundreds of millions of dollars per year if it can, let's examine how AI risk management could spare the global movie industry such losses.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
Trial begins of AI scan that could reduce risk of stillbirth and other conditions
Although the placenta can be visualised using ultrasound, measuring it and all the tiny blood vessels supplying it is extremely time-consuming, making this impractical for routine early pregnancy screening. Researchers at the University of Oxford have therefore used machine learning to develop a tool, trained on thousands of ultrasound images in which the placenta was painstakingly marked out by hand, to automate the recognition process.
RealNetworks wins facial recognition and AI analytics contract with US Air Force
SAFR from RealNetworks has earned its third Small Business Innovation Research (SBIR) deal from the United States Air Force (USAF) to extend SAFR's AI-powered analytics, including facial recognition, to unmanned ground vehicles (UGVs). The UGVs would be used to reduce risk in perimeter protection and in domestic emergency medical services (EMS) search and rescue missions. In a news release, the company said the contract will help its platform operate on an NVIDIA Jetson AGX Xavier-based UGV system, with the goal of reducing the risks service members face, as the SAFR-enhanced UGVs will be able to detect unauthorized persons in restricted areas with face biometrics. "As a USAF military working dog handler, I have employed canines in various environments fulfilling the multi-use role of detection and deterrence. The ability to utilize UGV systems to augment K9 teams during work/rest cycles, or as an additional force, broadens security in-depth and allows operations to continue unhindered," said Air Force Technical Sergeant Dustin Cain, Non-Commissioned Officer in Charge of Police Services, 366th Security Forces Squadron, Mountain Home Air Force Base, Idaho.
- Government > Military > Air Force (1.00)
- Government > Regional Government > North America Government > United States Government (0.72)
AI Can Be Dangerous--How To Reduce Risk When Using AI
Elon Musk has famously said, "AI is far more dangerous than nukes." His statement has some truth to it, and it has succeeded in raising our awareness of the dangers of AI. As leaders, part of our job is to ensure that what our companies do is safe. We don't want to harm our business partners, employees, customers, or anyone else we work with.